Re: [-empyre-] Accidents (was for example)



The Voices in my Head tell me that on 11/6/03 3:22 PM, Jim Andrews at
jim@vispo.com wrote:

> We have memories of roses, but presumably this doesn't necessitate having a
> real rose in the brain. Instead, the brain stores representations of things
> and ideas and relations, etc.

<snip>

and then Satellite Space Stations in Geosynchronous Orbit tell me that on
11/6/03 3:37 PM, Alan Sondheim at sondheim@panix.com wrote:

> 
> 
> I'm not suggesting obviously little roses in the brain, etc. I'm simply
> saying that 1. roses aren't represented, but when recalled are constructed
> (again Sartre etc.) and that 2. the reconstructing need not be
> algorithmic. I don't think the brain 'computes' in other words in any way
> similar to that of a computer; the model is awkward.

My position is more like Alan's, but from a completely different angle. I
had a sneaking suspicion the conversation would go this way, and its
logical conclusion is a question of AI, and given Mr Andrews's choice of
terms and argument, fairly hard AI.

My opposition to AI is not in the idea of AI as a discipline of programming,
nor is it in the idea of thinking machines as a useful good. My opposition
is more conceptual and semantic.

Semantic failure:

The semantic failure of the AI model lies in linguistic reduction.

Mr Andrews:

> When we read text, we bring to the task an impressive volume of processing
> algorithms to parse and interpret the text even to the point where we
> sometimes create a silent voice in the mind. The process of reading text
> would be related to the process also of 'reading' images or sounds: again,
> we bring quite marvelous processing algorithms to these tasks. How could it
> be any other way?

Reading or speaking: we're using language. We don't have the objects, the
roses, in our heads, and this points up the very problem neatly. We
*reduce* the rose to a mental construct. What we imagine or know about the
rose is very different from what the rose is, as we are limited to a certain
bandwidth of visible light, olfactory sensitivity, and tactile capacity. I
think we can all agree on that. Because we live on a planet of clocks, we
have many happy devices that tell us a lot more about Roses, such as their
appearance under UV radiation, nocturnal v. diurnal processes, sugar
consumption, etc.

These devices give us data, which we then add to our linguistic model of
the Rose, and which become associated with our memory of the Rose.

But, as anyone with even half an ounce of sense can understand, and also
something on which I think we can all agree: all of these collected data
points do not constitute a Rose. A Rose is a Rose is a Rose...

Now, substitute Rose with Brain, or Mind, or Consciousness, or what have
you, and you are back to the same problem, which is an essentially Kantian
position of the thing in itself.

The application of this argument to AI would be Searle's Chinese Room
Gedankenexperiment, which basically refutes AI on a conceptual basis. It
goes like this (and it refers rather abstrusely, but nicely, to some
earlier discussion about Chinese ideograms):

You are in a room. There are two slits. Paper comes in slit A. It has a
question in Chinese, written in classic Chinese ideograms. You don't
understand Chinese. At all. However, you have a book filled with questions.
You look at the ideogram question and look it up in the book. When you find
it, there is an answer, in Chinese. You pick up a brush and write the
answer on another piece of paper. You submit the answer through slit B.
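
(For the programmers on the list: the room reduces to a table lookup.
Here's a minimal sketch in Python - the rule book and its question/answer
strings are hypothetical stand-ins for Searle's book, and the point is that
the function produces fluent answers while understanding nothing:)

    # The Chinese Room as pure symbol manipulation: a lookup table.
    # The "book" maps question-ideograms to answer-ideograms; the room
    # matches shapes and understands nothing.
    RULE_BOOK = {
        "玫瑰是红色的吗？": "是的。",    # hypothetical entry ("Are roses red?" -> "Yes.")
        "你懂中文吗？": "当然懂。",      # hypothetical entry ("Do you understand Chinese?" -> "Of course.")
    }

    def chinese_room(question: str) -> str:
        # Paper in slit A, paper out slit B. No semantics in between.
        return RULE_BOOK.get(question, "请再说一遍。")  # fallback: "Please say that again."

    # From outside, the room looks fluent:
    print(chinese_room("你懂中文吗？"))  # prints "当然懂。" - but nothing in here *knows* that
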

Everyone outside the room thinks you know Chinese. But: You Don't. That is
what computers do: they take stored charges of electricity held in patterns
in various gates, process and manipulate these patterns with other
electrical patterns, and then send out a pattern of electrical charges that
results from this processing. They don't think. They don't know. They might
provide some illusion of knowing, because the patterns we get from them
indicate it. But, in fact, it's an illusion.

But the room GAVE ME THE RIGHT ANSWER! It has to be thinking!
Eeeek!!! IT'S ALIVE!!!

Now reverse it: instead of using the failed language of consciousness to
describe a machine as intelligent, use the language of machines to fail in
describing the human brain as mechanical - as prone to use algorithms for
its operations, for instance.

How could it be any other way?

We don't know, and that is the best place for the inquiry to continue its
emergence: both artistic and scientific. We can see patterns emerge, and one
of these patterns might be a certain tendency to algorithmic behaviour. This
doesn't mean that the brain is algorithmic. It just means that it can
exhibit behaviour that we determine to be algorithmic.

The model (brain | machine) isn't awkward. It's dead - except among zealots
like Kurzweil and Minsky et al.

Also, Gödel's Incompleteness Theorem has a lot to say about this, but I
haven't the time right now to delve into that corner.

I think the greatest problem facing all of this is Kant. If the thing in
itself is unknowable, does it even exist? This metaphysic precedes
existentialism (by more than just history): while existence precedes
essence, to know anything about something, even to know that it exists, is
to engage in a level of reduction from that which is (Herakleitos). Hence
that which is existent exists prior to our knowing it exists - immanence
precedes "presence". Kind of like a hyperexistentialism.

=================================================
disclaimer: unwarranted speculation follows:
Or, to use a string theory physics model: a second time dimension curled up
such that our brains only get S1, S2, S3 and T1, but this T2, which permits
immanence, is so tiny that we can't see it: hence (my personal theory here)
we are *smeared* across hyperdimensional spacetime. Hence: we can't know it,
but it still exists, here, now, just at an angle we can't approach, though
it can be described from other surrounding data points (once again, surreal
math comes in handy when you need to count things that don't exist...yet).

For more on this, see Hyperspace by Michio Kaku and The End of Time by
Julian Barbour.
==================================================

This is why art is a form of discourse completely different from that of
writing. It makes manifest those little roses in the brain and puts them
there for you to look at and understand. Or provides you with noise that has
been shaped and filtered in a way that has value and/or meaning. And that's
why the brain isn't algorithmic but can behave in a way that describes
algorithms. The brain is adept at seeing patterns, and is frequently
deceived into mistaking these patterns for the actual thing that is
generating these patterns, and thus symbolically projecting itself into
everything and everything into itself.

HW




